-
Dimension reduction (DR) algorithms have proven extremely useful for gaining insight into large-scale high-dimensional datasets, particularly for finding clusters in transcriptomic data. The initial phase of these DR methods often involves converting the original high-dimensional data into a graph, in which each edge represents the similarity or dissimilarity between a pair of data points. However, this graph is frequently suboptimal, owing to unreliable high-dimensional distances and the limited information extracted from the high-dimensional data. This problem worsens as the dataset size increases. If we reduce the size of the dataset by selecting points from specific sections of the embedding, the clusters observed through DR become more separable, since the extracted subgraphs are more reliable. In this paper, we introduce LocalMAP, a new dimensionality reduction algorithm that dynamically and locally adjusts the graph to address this challenge. By dynamically extracting subgraphs and updating the graph on the fly, LocalMAP can identify and separate real clusters in the data that other DR methods may overlook or merge. We demonstrate the benefits of LocalMAP through a case study on biological datasets, highlighting its utility in helping users more accurately identify clusters in real-world problems.
Free, publicly-accessible full text available April 11, 2026.
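The graph-construction phase the abstract describes can be illustrated with a minimal sketch: build a k-nearest-neighbor graph from raw high-dimensional points, with each edge weighted by Euclidean distance. This is an assumption-laden toy (the function name `knn_graph`, the choice of Euclidean distance, and the plain kNN rule are illustrative, not LocalMAP's actual procedure):

```python
import numpy as np

def knn_graph(X, k=3):
    """Toy kNN graph builder (hypothetical helper, not LocalMAP itself):
    each edge weight is the Euclidean distance from a point to one of
    its k nearest neighbors."""
    # Pairwise squared distances via the expansion ||a - b||^2.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    np.fill_diagonal(d2, np.inf)          # exclude self-edges
    nbrs = np.argsort(d2, axis=1)[:, :k]  # k closest points per row
    return {(i, int(j)): float(np.sqrt(max(d2[i, j], 0.0)))
            for i in range(len(X)) for j in nbrs[i]}

# Two well-separated blobs: reliable neighbors stay within each blob.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 5)), rng.normal(5, 0.1, (10, 5))])
edges = knn_graph(X, k=3)
cross = [(i, j) for (i, j) in edges if (i < 10) != (j < 10)]
print(len(edges), len(cross))  # 60 edges, 0 cross-cluster edges
```

On clean, well-separated data like this, the graph contains no spurious cross-cluster edges; the abstract's point is that with large, noisy high-dimensional data such edges do appear, which is what motivates adjusting the graph locally.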
-
Sparsity is a central aspect of interpretability in machine learning. Typically, sparsity is measured in terms of the size of a model globally, such as the number of variables it uses. However, this notion of sparsity is not particularly relevant for decision-making; someone subjected to a decision does not care about variables that do not contribute to that decision. In this work, we dramatically expand a notion of decision sparsity called the Sparse Explanation Value (SEV) so that its explanations are more meaningful. SEV considers movement along a hypercube towards a reference point. By allowing flexibility in that reference, and by considering how distances along the hypercube translate to distances in feature space, we can derive sparser and more meaningful explanations for various function classes. We present cluster-based SEV and its variant tree-based SEV, introduce a method that improves the credibility of explanations, and propose algorithms that optimize decision sparsity in machine learning models.
Free, publicly-accessible full text available December 1, 2025.
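The hypercube idea behind SEV can be sketched with a brute-force toy: starting from a query point, find the smallest number of features that must be reset to the reference's values before the model's decision flips. This is a hedged illustration of the general notion of decision sparsity, not the paper's algorithms; the helper name `sev_minus` and the 0/1 classifier interface are assumptions:

```python
from itertools import combinations

def sev_minus(predict, x, ref):
    """Brute-force decision-sparsity sketch (hypothetical helper):
    the smallest number of features of query x that must be reset to
    the reference values to flip predict's 0/1 output."""
    base = predict(x)
    n = len(x)
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            z = list(x)
            for i in subset:
                z[i] = ref[i]  # move along the hypercube toward ref
            if predict(z) != base:
                return k
    return n  # no subset of resets flips the decision

# Toy linear classifier: positive iff the first three features sum past 1.5.
predict = lambda v: int(v[0] + v[1] + v[2] > 1.5)
print(sev_minus(predict, [1, 1, 0], [0, 0, 0]))  # 1: one reset flips it
```

Resetting either active feature drops the sum below the threshold, so this decision has sparsity 1 even though the model consults three variables, which is exactly the gap between global model sparsity and per-decision sparsity that the abstract highlights.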
-
Chalcogenide perovskites have emerged as promising semiconductor materials due to their appealing properties, including tunable bandgaps, high absorption coefficients, reasonable carrier lifetimes and mobilities, excellent chemical stability, and environmentally benign nature. However, beyond the well-studied BaZrS3, reports on chalcogenide perovskite thin films with diverse compositions are scarce. In this study, the realization of four different types of chalcogenide perovskite thin films with controlled phases, through CS2 annealing of amorphous chalcogenide precursor films deposited by pulsed laser deposition (PLD), is reported. This achievement is guided by a thorough theoretical investigation of the phase stability of chalcogenide perovskites. Upon crystallization in the distorted perovskite phase, all materials exhibit photoluminescence (PL) with peak positions in the visible range, consistent with their expected bandgap values. However, the full-width-at-half-maximum (FWHM) of the PL spectra varies significantly across these materials, ranging from 99 meV for SrHfS3 to 231 meV for BaHfS3. The difference is attributed to the difference in kinetic barriers between local structural motifs for the Sr and Ba compounds. The findings underscore the promise of chalcogenide perovskite thin films as an alternative to traditional halide perovskites for optoelectronic applications, while highlighting the challenges in optimizing their synthesis and performance.